Ollama Local Setup and Spring AI Integration
Mastering Ollama: Install, Run & Connect with Spring AI
Want to bring AI magic to your local machine? Buckle up! This guide walks you through installing Ollama, running powerful LLMs, and connecting them to Spring AI. It's like OpenAI's GPT, but you run it yourself. Let's dive in!
What is Ollama?
Ollama is your AI sidekick that lets you run large language models (LLMs) right on your computer. Think of it as Docker, but for AI models! Instead of spinning up databases or message queues, Ollama lets you effortlessly download, run, and chat with AI models like LLaMA-2, Mistral, CodeLLaMA, and DeepSeek-R1. You can even fine-tune them for your needs. Cool, right?
Installing Ollama
Ollama is easy to set up, and you have three ways to get it running.
1. One-Click Installer (Easiest!)
For Windows & Mac users who love simplicity.
Steps:

1. Visit Ollama's website
2. Smash that Download button
3. Run the installer (.exe for Windows, .dmg for Mac)
4. Follow the on-screen setup wizard
5. Once installed, fire it up with:

```shell
ollama serve
```
Done! Easy, right?
2. Command Line Installation (For the Techies)
Perfect for Linux users who love typing commands.
```shell
curl -fsSL https://ollama.com/install.sh | sh
ollama serve
```
Boom! You're up and running.
3. Running Ollama in Docker (For the Container Wizards)
Want to isolate Ollama in a container? Here's how:
```shell
docker pull ollama/ollama
docker run -d --gpus all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```

Note: `--gpus all` requires the NVIDIA Container Toolkit. To pin the container to one GPU, replace `all` with that GPU's ID, and on a CPU-only machine just drop the `--gpus` flag entirely.

The container runs the Ollama server for you, so there's no separate `ollama serve` step. If the container is stopped, restart it with:

```shell
docker start ollama
```

To chat with a model inside the container:

```shell
docker exec -it ollama ollama run gemma2
```
And you're ready to roll!
Download & Run a Model
Time to get your hands dirty! Let's grab a powerful AI model and chat with it.
```shell
ollama run gemma2
```
This command downloads the Gemma2 model (if you don't have it already) and drops you into an interactive chat. Just type your prompts, and let the AI do the magic!
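Once you've pulled a few models, you may want to list them programmatically rather than from the CLI. Ollama exposes the installed models at `GET /api/tags`. Here's a minimal Java sketch (assuming Java 11+ and a server on the default port); the regex-based `modelNames` helper is just for illustration — a real app should use a proper JSON library.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class OllamaTagsExample {

    // Pull every "name" field out of the /api/tags JSON response.
    // Crude on purpose: a production app should use a JSON parser instead.
    static List<String> modelNames(String json) {
        List<String> names = new ArrayList<>();
        Matcher m = Pattern.compile("\"name\"\\s*:\\s*\"([^\"]+)\"").matcher(json);
        while (m.find()) {
            names.add(m.group(1));
        }
        return names;
    }

    public static void main(String[] args) throws Exception {
        // Requires a running Ollama server on the default port.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:11434/api/tags"))
                .GET()
                .build();
        String body = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString())
                .body();
        modelNames(body).forEach(System.out::println);
    }
}
```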
Connecting to the Ollama API (AI in Your Apps!)
Ollama provides a REST API at http://localhost:11434, making it super easy to integrate with your apps.

Try this out:
```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?"
}'
```
You can also use Postman or any HTTP client to test it.
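Here's the same request from plain Java, as a minimal sketch (assuming Java 11+ and a running server on the default port). Note that `/api/generate` streams newline-delimited JSON chunks by default; adding `"stream": false` to the body asks for one complete JSON response instead.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OllamaGenerateExample {

    // Build the JSON body for Ollama's /api/generate endpoint.
    // "stream": false requests a single JSON object instead of NDJSON chunks.
    // (No escaping here, so keep the model and prompt free of quotes,
    // or use a JSON library in real code.)
    static String requestBody(String model, String prompt) {
        return "{\"model\": \"" + model + "\", \"prompt\": \"" + prompt
                + "\", \"stream\": false}";
    }

    public static void main(String[] args) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:11434/api/generate"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        requestBody("llama3", "Why is the sky blue?")))
                .build();
        // Requires a running Ollama server with the llama3 model pulled.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```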
Using Ollama with Spring AI
Just as with OpenAI, Spring AI makes it easy to talk to Ollama. Here's how you can integrate it into your Java project.
1. Add the Maven Dependency
```xml
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-ollama-spring-boot-starter</artifactId>
</dependency>
```

The version is typically managed by the Spring AI BOM, so you don't need to specify it here.
2. Configure the Base URL & Model Name
By default, Spring AI assumes Ollama at localhost:11434 with the mistral model. You can tweak it in application.properties:

```properties
spring.ai.ollama.base-url=http://localhost:11434
spring.ai.ollama.chat.options.model=gemma
spring.ai.ollama.chat.options.temperature=0.4
```
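If you prefer YAML, the same settings in application.yml look like this:

```yaml
spring:
  ai:
    ollama:
      base-url: http://localhost:11434
      chat:
        options:
          model: gemma
          temperature: 0.4
```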
Or in Java (note the `${...}` placeholder syntax in `@Value`):

```java
@Bean
OllamaChatModel ollamaChatModel(@Value("${spring.ai.ollama.base-url}") String baseUrl) {
    return new OllamaChatModel(new OllamaApi(baseUrl),
            OllamaOptions.create()
                    .withModel("gemma")
                    .withTemperature(0.4f));
}
```
3. Send AI Prompts & Get Responses
Want to chat with your AI model? Easy!
```java
@Autowired
OllamaChatModel chatModel;

chatModel.stream(new Prompt(
        "Generate the names of 5 famous pirates.",
        OllamaOptions.create()
                .withModel("gemma2")
                .withTemperature(0.4F)
)).subscribe(chatResponse -> {
    System.out.print(chatResponse.getResult().getOutput().getContent());
});
```
Or use synchronous calls:
```java
ChatResponse response = chatModel.call(
        new Prompt(
                "Generate the names of 5 famous pirates.",
                OllamaOptions.create()
                        .withModel("gemma2")
                        .withTemperature(0.4F)
        ));

response.getResults().stream()
        .map(generation -> generation.getOutput().getContent())
        .forEach(System.out::println);
```
Now your Spring Boot app can talk to an AI model just like ChatGPT!
Wrapping Up
In this guide, we:

- Installed Ollama in multiple ways
- Downloaded & ran an AI model
- Connected to the Ollama API
- Integrated it with Spring AI
Ollama is a game-changer for local AI development. Whether you're building chatbots, AI-powered apps, or fine-tuning models, you're now ready to conquer the AI world!
Happy Coding!